- The following Technical Reports are available from the International
- Computer Science Institute via anonymous FTP from icsi-ftp.Berkeley.EDU.
- They are located in ~ftp/pub/techreports and are all compressed PostScript
- files. If you experience difficulties, send mail to ftp-info@icsi.Berkeley.EDU.
- -------------------------------------------------------------------------------
- tr-89-032.ps.Z
-
- A Connectionist Model of Unification
- Andreas Stolcke
- TR-89-032
- May 1989
-
- A general approach to encoding and unifying recursively
- nested feature structures in connectionist networks is
- described. The unification algorithm implemented by
- the net is based on iterative coarsening of equivalence
- classes of graph nodes. This method allows the
- reformulation of unification as a constraint
- satisfaction problem and enables the connectionist
- implementation to take full advantage of the potential
- parallelism inherent in unification, resulting in
- sublinear time complexity. Moreover, the method is
- able to process any number of feature structures in
- parallel, searching for possible unifications and
- making decisions among mutually exclusive unifications
- where necessary.
-
- Keywords and phrases. Unification, constraint
- satisfaction, connectionism, feature structures.
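- 
- For readers who want a concrete picture of the underlying (sequential) idea,
- the following Python sketch unifies nested feature structures by merging
- equivalence classes of graph nodes with union-find. It is an illustration
- only, not the connectionist algorithm of the report; all names are invented.
- 
-     # Sketch only: sequential unification via union-find over graph nodes.
-     class Node:
-         def __init__(self, atom=None):
-             self.atom = atom      # atomic value, or None for a complex node
-             self.arcs = {}        # feature name -> Node
-             self.parent = self    # union-find parent pointer
- 
-     def find(n):
-         while n.parent is not n:
-             n.parent = n.parent.parent   # path halving
-             n = n.parent
-         return n
- 
-     def unify(a, b):
-         a, b = find(a), find(b)
-         if a is b:
-             return True
-         if a.atom is not None and b.atom is not None:
-             return a.atom == b.atom                      # atomic clash check
-         if (a.atom is not None and b.arcs) or (b.atom is not None and a.arcs):
-             return False                                 # atom vs. complex clash
-         if a.atom is not None:
-             a, b = b, a                                  # keep the complex node as root
-         if a.atom is None:
-             a.atom = b.atom
-         b.parent = a                                     # coarsen: merge the classes
-         for feat, target in list(b.arcs.items()):
-             if feat in a.arcs:
-                 if not unify(a.arcs[feat], target):
-                     return False
-             else:
-                 a.arcs[feat] = target
-         return True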
-
- -------------------------------------------------------------------------------
- tr-90-009.ps.Z
-
- Miniature Language Acquisition: A touchstone for cognitive science
- Jerome A. Feldman, George Lakoff, Andreas Stolcke,
- and Susan Hollbach Weber
- TR-90-009
- March 1990 (revised April 1990)
-
- Cognitive Science, whose genesis was interdisciplinary,
- shows signs of reverting to a disjoint collection of
- fields. This paper presents a compact, theory-free
- task that inherently requires an integrated solution.
- The basic problem is learning a subset of an arbitrary
- natural language from picture-sentence pairs. We
- describe a very specific instance of this task and show
- how it presents fundamental (but not impossible)
- challenges to several areas of cognitive science
- including vision, language, inference and learning.
-
- -------------------------------------------------------------------------------
- tr-90-010.ps.Z
-
- L0: A Testbed for Miniature Language Acquisition
- Susan Hollbach Weber and Andreas Stolcke
- TR-90-010
- July 1990
-
- L0 constitutes a recent effort in Cognitive Science to
- build a natural language acquisition system for a
- limited visual domain. As a preparatory step towards
- addressing the issue of learning in this domain, we
- have built a set of tools for rapid prototyping and
- experimentation in the areas of language processing,
- image processing, and knowledge representation. The
- special focus of our work was the integration of these
- different components into a flexible system which would
- allow us to better understand the domain given by L0
- and experiment with alternative approaches to the
- problems it poses.
-
- -------------------------------------------------------------------------------
- tr-90-011.ps.Z
-
- A Network for Extracting the Locations of Point Clusters
- Using Selective Attention
-
- Subutai Ahmad and Stephen Omohundro
- ahmad@icsi.berkeley.edu and om@icsi.berkeley.edu
- TR-90-011
-
- This report explores the problem of dynamically
- computing visual relations in connectionist systems.
- It concentrates on the task of learning whether
- three clumps of points in a 256x256 image form an
- equilateral triangle. We argue that feed-forward
- networks for solving this task would not scale well
- to images of this size. One reason for this is that
- local information does not contribute to the
- solution: it is necessary to compute relational
- information such as the distances between points.
- Our solution implements a mechanism for dynamically
- extracting the locations of the point clusters. It
- consists of an efficient focus of attention
- mechanism and a cluster detection scheme. The focus
- of attention mechanism allows the system to select
- any circular portion of the image in constant time.
- The cluster detector directs the focus of attention
- to clusters in the image. These two mechanisms are
- used to sequentially extract the relevant
- coordinates. With this new representation
- (locations of the points) very few training examples
- are required to learn the correct function. The
- resulting network is also very compact: the number
- of required weights is proportional to the number of
- input pixels.
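- 
- Once the cluster locations have been extracted, the target relation reduces
- to a comparison of three pairwise distances. A minimal sketch (the tolerance
- is arbitrary and not taken from the report):
- 
-     import numpy as np
- 
-     def is_equilateral(p1, p2, p3, tol=0.05):
-         # sort the three pairwise distances and compare the extremes
-         d = sorted(np.linalg.norm(np.subtract(a, b))
-                    for a, b in [(p1, p2), (p2, p3), (p1, p3)])
-         return d[2] > 0 and (d[2] - d[0]) <= tol * d[2]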
-
- -------------------------------------------------------------------------------
- tr-90-015.ps.Z
-
- Learning Feature-based Semantics with Simple Recurrent Networks
- Andreas Stolcke
- TR-90-015
- April 1990
-
- The paper investigates the possibilities for using
- simple recurrent networks as transducers which map
- sequential natural language input into non-sequential
- feature-based semantics. The networks perform well on
- sentences containing a single main predicate (encoded
- by transitive verbs or prepositions) applied to
- multiple-feature objects (encoded as noun-phrases with
- adjectival modifiers), and show robustness against
- ungrammatical inputs. A second set of experiments
- deals with sentences containing embedded structures.
- Here the network is able to process multiple levels of
- sentence-final embeddings but only one level of
- center-embedding. This turns out to be a consequence
- of the network's inability to retain information that
- is not reflected in the outputs over intermediate
- phases of processing. Two extensions to Elman's (1988)
- original recurrent network architecture are introduced.
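- 
- As background, a simple recurrent (Elman-style) network feeds a copy of the
- previous hidden layer back in as a context input at each time step. The
- NumPy sketch below shows the forward pass only; dimensions, nonlinearities
- and names are illustrative rather than those used in the report.
- 
-     import numpy as np
- 
-     def srn_forward(inputs, W_ih, W_hh, W_ho, b_h, b_o):
-         """inputs: a sequence of input vectors; returns one output per step."""
-         hidden = np.zeros(W_hh.shape[0])          # context starts at zero
-         outputs = []
-         for x in inputs:
-             # hidden layer sees the current input plus the previous hidden state
-             hidden = np.tanh(W_ih @ x + W_hh @ hidden + b_h)
-             outputs.append(1.0 / (1.0 + np.exp(-(W_ho @ hidden + b_o))))
-         return outputs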
-
- -------------------------------------------------------------------------------
- tr-91-030.ps.Z
-
- Probability estimation by feed-forward networks in continuous speech
- recognition
- Steve Renals, Nelson Morgan and Herve Bourlard
- TR-91-030
- August 1991
-
- We review the use of feed-forward networks as estimators of
- probability densities in hidden Markov modelling. In this
- paper we are mostly concerned with radial basis functions
- (RBF) networks. We note the isomorphism of RBF networks to
- tied mixture density estimators; additionally we note that
- RBF networks are trained to estimate posteriors rather than
- the likelihoods estimated by tied mixture density
- estimators. We show how the neural network training should
- be modified to resolve this mismatch. We also discuss
- problems with discriminative training, particularly the
- problem of dealing with unlabelled training data and the
- mismatch between model and data priors.
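- 
- The standard resolution of the posterior/likelihood mismatch in hybrid
- ANN/HMM systems follows from Bayes' rule, p(x|q) = P(q|x) p(x) / P(q):
- dividing the network outputs by the class priors yields scaled likelihoods,
- since p(x) is a common factor that cancels during decoding. A minimal
- sketch of that step (not necessarily the exact procedure of the report):
- 
-     import numpy as np
- 
-     def scaled_likelihoods(posteriors, priors, floor=1e-6):
-         """posteriors, priors: arrays indexed by HMM state."""
-         return posteriors / np.maximum(priors, floor)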
-
- -------------------------------------------------------------------------------
- tr-91-031.ps.Z
-
- pSather monitors: Design, Tutorial, Rationale and Implementation
- Jerome A. Feldman, Chu-Cheow Lim and Franco Mazzanti
- TR-91-031
- September 1991
-
- Sather is a new object-oriented programming language
- under development at the International Computer
- Science Institute. The initial beta test release of
- the language was in June, 1991. From the outset, one
- goal of the Sather project has been the incorporation
- of constructs to support parallel programming.
-
- pSather is a parallel extension of Sather aimed at
- shared memory parallel architectures. A prototype of
- the language is currently being implemented on a
- Sequent Symmetry and on SUN Sparc-Stations.
- pSather monitors are one of the basic new features
- introduced in the language to deal with parallelism.
- The current design is presented and discussed in
- detail.
-
- -------------------------------------------------------------------------------
- tr-91-032.ps.Z
-
- GAL: Networks that grow when they learn and shrink when they forget
- Ethem Alpaydin
- TR-91-032
-
- Learning that is limited to the modification of some
- parameters has limited scope; the capability to
- modify the system structure is also needed to widen
- the range of what can be learned. In the case of
- artificial neural networks, learning by iterative
- adjustment of synaptic weights can only succeed if
- the network designer predefines an appropriate network
- structure, i.e., number of hidden layers, units,
- and the size and shape of their receptive and
- projective fields. This paper advocates the view
- that the network structure should not, as usually
- done, be determined by trial-and-error but should be
- computed by the learning algorithm. Incremental
- learning algorithms can modify the network structure
- by addition and/or removal of units and/or links. A
- survey of current connectionist literature is given on
- this line of thought. ``Grow and Learn'' (GAL) is a
- new algorithm that learns an association in one shot
- because it is incremental and uses a local
- representation. During the so-called ``sleep''
- phase, units that were previously stored but which
- are no longer necessary due to recent modifications
- are removed to minimize network complexity.
- The incrementally constructed network can later
- be fine-tuned off-line to improve performance.
- Another proposed method that greatly increases
- recognition accuracy is to train a number of
- networks and vote over their responses. The
- algorithm and its variants are tested on
- recognition of handwritten numerals and seem
- promising, especially in terms of learning speed.
- This makes the algorithm attractive for on-line
- learning tasks, e.g., in robotics. The
- biological plausibility of incremental learning is
- also discussed briefly.
-
- Keywords
- Incremental learning, supervised learning,
- classification, pruning, destructive methods, growth,
- constructive methods, nearest neighbor.
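- 
- A schematic reconstruction of the growth idea (not the published algorithm,
- and omitting the ``sleep'' pruning phase): with a local, exemplar-based
- representation, a pattern can be stored in one shot whenever the current
- units would misclassify it.
- 
-     import numpy as np
- 
-     class GrowingNearestNeighbor:
-         def __init__(self):
-             self.units = []                    # list of (prototype, label)
- 
-         def predict(self, x):
-             if not self.units:
-                 return None
-             dists = [np.linalg.norm(np.subtract(x, p)) for p, _ in self.units]
-             return self.units[int(np.argmin(dists))][1]
- 
-         def learn(self, x, label):
-             # grow: add a new local unit only if the pattern is misclassified
-             if self.predict(x) != label:
-                 self.units.append((np.asarray(x, dtype=float), label))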
-
- -------------------------------------------------------------------------------
- tr-91-034.ps.Z
-
- Sather Language Design and Performance Evaluation
- Chu-Cheow Lim and Andreas Stolcke
- TR-91-034
-
- Sather is an object-oriented language recently designed
- and implemented at the International Computer Science
- Institute in Berkeley. It compiles into C and is
- intended to allow development of object-oriented,
- reusable software while retaining C's efficiency and
- portability. We investigate to what extent these goals
- were met through a comparative performance study and
- analysis of Sather and C programs on a RISC machine.
- Several language design decisions in Sather are
- motivated by the goal of efficient compilation to
- standard architectures. We evaluate the reasoning
- behind these decisions, using instruction set usage
- statistics, cache simulations, and other data collected
- by instrumented Sather-generated code.
-
- We conclude that while Sather users still pay a
- moderate overhead for programming convenience (in both
- run time and memory usage), the overall CPU and memory
- usage profiles of Sather programs are virtually
- identical to those of comparable C programs. Our
- analysis also shows that each of the choices made in
- Sather design and implementation is well justified by a
- distinctive performance advantage. It seems, then,
- that Sather proves the feasibility of its own design
- goal of making object-oriented programming efficient on
- standard architectures using a combination of judicious
- language design and efficient implementation.
- -------------------------------------------------------------------------------
- tr-91-035.ps.Z
-
- HiPNeT-1:
- A Highly Pipelined Architecture for Neural Network Training
- Krste Asanovic, Brian E. D. Kingsbury, Nelson Morgan,
- and John Wawrzynek
- TR-91-035
- June 1991
-
- Current artificial neural network (ANN) algorithms
- require extensive computational resources. However,
- they exhibit massive fine-grained parallelism and
- require only moderate arithmetic precision. These
- properties make possible custom VLSI implementations
- for high performance, low cost systems. This paper
- describes one such system, a special purpose digital
- VLSI architecture to implement neural network training
- in a speech recognition application.
-
- The network algorithm has a number of atypical
- features. These include: shared weights, sparse
- activation, binary inputs, and a serial training input
- stream. The architecture illustrates a number of
- design techniques to exploit these algorithm-specific
- features. The result is a highly pipelined system
- which sustains a learning rate of one pattern per
- clock cycle. At a clock rate of 20MHz each "neuron"
- site performs 200 million connection updates per
- second. Multiple such neurons can be integrated onto
- a modestly sized VLSI die.
-
-
- -------------------------------------------------------------------------------
- tr-91-036.ps.Z
-
- Experimental Determination of Precision Requirements for
- Back-Propagation Training of Artificial Neural Networks
-
- Krste Asanovic and Nelson Morgan
- TR-91-036
- June 1991
-
- The impact of reduced weight and output precision on
- the back-propagation training algorithm is
- experimentally determined for a feed-forward
- multi-layer perceptron. In contrast with previous
- such studies, the network is large with over 20,000
- weights, and is trained with a large, real-world data
- set of over 130,000 patterns to perform a difficult
- task, that of phoneme classification for a continuous
- speech recognition system.
-
- The results indicate that 16b weight values are
- sufficient to achieve training and classification
- results comparable to 32b floating point, provided
- that weight and bias values are scaled separately, and
- that rounding rather than truncation is employed to
- reduce the precision of intermediary values. Output
- precision can be reduced to 8 bits without significant
- effects on performance.
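- 
- The rounding-versus-truncation distinction studied here amounts to the
- following fixed-point quantization step (bit widths and scaling below are
- illustrative only, not the report's experimental setup):
- 
-     import math
- 
-     def quantize(value, frac_bits, mode="round"):
-         scale = 1 << frac_bits
-         scaled = value * scale
-         q = math.floor(scaled) if mode == "truncate" else math.floor(scaled + 0.5)
-         return q / scale
- 
-     # e.g. quantize(0.1234, 8) vs. quantize(0.1234, 8, mode="truncate")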
-
- -------------------------------------------------------------------------------
- tr-91-048.ps.Z
-
- ICSIM: An Object-Oriented Connectionist Simulator
- Heinz W. Schmidt and Benedict Gomes
- TR-91-048
- November 1991
-
-
- ICSIM is a connectionist net simulator under development
- at ICSI and written in Sather. It is object-oriented
- to meet the requirements for flexibility and reuse of
- homogeneous and structured connectionist nets and to
- allow the user to encapsulate efficient customized
- implementations perhaps running on dedicated hardware.
- Nets are composed by combining off-the-shelf library
- classes and, if necessary, by specializing some of their
- behaviour. General user interface classes allow a
- uniform or customized graphic presentation of the nets
- being modeled. The report gives an overview of the
- simulator. Its main concepts, the class structure of
- its library and some of the design decisions are
- sketched and a number of example nets are used to
- illustrate how net structure, interconnection and
- behavior are defined.
- -------------------------------------------------------------------------------
- tr-91-050.ps.Z
-
- Learning Spatial Concepts Using a Partially-Structured
- Connectionist Architecture
- Terry Regier
- TR-91-050
- October 1991
-
- This paper reports on the learning of spatial concepts in
- the L0 project. The challenge of designing an architecture
- capable of learning spatial concepts from any of the world's
- languages is first highlighted by reviewing the spatial
- systems of a number of languages which differ strikingly
- from English in this regard. A partially structured
- connectionist architecture is presented which has
- successfully learned concepts from the languages outlined.
- In this architecture, highly structured subnetworks,
- specialized for the spatial concept learning task, feed
- into an unstructured, fully-connected upper subnetwork.
- The system's success at the learning task is attributed on
- the one hand to the constrained search space which results
- from structuring, and on the other hand to the flexibility
- afforded by the unstructured upper subnetwork.
- -------------------------------------------------------------------------------
- tr-91-058.ps.Z
-
- Detecting Skewed Symmetries
- Stefan Posch
- TR-91-058
- October 1991
-
- Many surfaces of objects in our world are bounded by
- planar bilaterally symmetric figures. When these figures
- are imaged under orthographic projection a skewed symmetric
- contour results. In this paper a new fast, local method
- to recover skewed symmetries from curve segments is proposed.
- It can be applied to complete as well as to occluded
- contours. Furthermore, the skewed symmetry property is
- employed to overcome fragmentation of a contour during
- segmentation.
- -------------------------------------------------------------------------------
- tr-91-059.ps.Z
-
- Line Labeling Using Markov Random Fields
- Terry Regier
- TR-91-059
- November 1991
-
- The task of obtaining a line labeling from a greyscale
- image of trihedral objects presents difficulties not
- found in the classical line labeling problem. As originally
- formulated, the line labeling problem assumed that each
- junction was correctly pre-classified as being of a
- particular junction type (e.g. T, Y, arrow); the success
- of the proposed algorithms has depended critically upon
- getting this initial junction classification correct. In
- real images, however, junctions of different types may
- actually look quite similar, and this pre-classification
- is often difficult to achieve. This issue is addressed by
- recasting the line labeling problem in terms of a coupled
- probabilistic system which labels both lines and junctions.
- This results in a robust system, in which prior knowledge
- of acceptable configurations can serve to overcome the
- problem of misleading or ambiguous evidence.
-
-
- -------------------------------------------------------------------------------
- tr-91-061.ps.Z
-
- Combinatory Differential Fields: An Algebraic Approach to
- Approximate Computation and Constructive Analysis
- Karl Aberer
- TR-91-061
- October 1991
-
- The algebraic structure of combinatory differential fields
- is constructed to provide a semantics for computations in
- analysis. In this setting programs, approximations, limits
- and operations of analysis are represented as algebraic
- terms. Analytic algorithms can be derived by algebraic
- methods. The main tools in this construction are combinatory
- models, which are inner algebras of Engeler graph models. As
- a universal domain of denotational semantics, the lattice
- structure of the graph models permits a strikingly
- simple semantics for computations with approximations. As
- models of combinatory algebra they provide all essential
- computational constructs, including recursion. Combinatory
- models are constructed as extensions of first order theories.
- The classical first order theory to describe analysis is the
- theory of differential fields. It turns out that two types of
- computational constructs, namely composition and piecewise
- definition of functions, are preferably introduced as
- extensions of the differential fields theory. Combinatory
- differential fields are then the combinatory models of these
- enriched differential fields. We show for basic algorithms
- of computational analysis how their combinatory counterparts
- are derived in the algebraic setting. We illustrate how these
- algorithms can be implemented in a computer
- algebra environment such as Mathematica.
- -------------------------------------------------------------------------------
- tr-91-062.ps.Z
-
- Self-Testing/Correcting with Applications to Numerical Problems
- (Revised Version)
-
- Manuel Blum, Michael Luby, Ronitt Rubinfeld
- TR-91-062
-
- Suppose someone gives us an extremely fast program $P$
- that we can call as a black box to compute a function $f$.
- Should we trust that $P$ works correctly?
- A {\em self-testing/correcting pair} for $f$ allows us to:
- (1) estimate the probability that $P(x) \not= f(x)$ when $x$ is
- randomly chosen; (2) on {\em any} input $x$, compute $f(x)$
- correctly as long as $P$ is not too faulty on average.
- Furthermore, both (1) and (2) take time only slightly more
- than the original running time of $P$.
-
- We present general techniques for constructing simple to
- program self-testing/correcting pairs for a variety of
- numerical functions, including integer multiplication,
- modular multiplication, matrix multiplication,
- inverting matrices, computing the determinant
- of a matrix, computing the rank of a matrix, integer division,
- modular exponentiation and polynomial multiplication.
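- 
- A well-known randomized check in the same spirit (Freivalds' test, not one
- of the constructions in the report) gives the flavor of result checking for
- matrix multiplication: a claimed product C = AB is verified by comparing
- Cv with A(Bv) for random +/-1 vectors v, at quadratic rather than cubic cost.
- 
-     import numpy as np
- 
-     def freivalds_check(A, B, C, trials=20, rng=None):
-         rng = rng or np.random.default_rng()
-         for _ in range(trials):
-             v = rng.choice([-1, 1], size=C.shape[1])
-             if not np.allclose(A @ (B @ v), C @ v):
-                 return False          # definitely incorrect
-         return True                   # correct with high probability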
- -------------------------------------------------------------------------------
- tr-91-064.ps.Z
-
- Distortion Accumulation in Image Transform Coding/Decoding Cascades
- Michael Gilge
- TR-91-064
- December 1991
-
- With an increasing number of applications that employ
- transform coding algorithms for data reduction, the effect
- of distortion accumulation caused by multiple coding needs
- to be investigated. Multiple coding occurs when more than
- one coding system is connected in a cascade. From the
- second stage on, the coding algorithm operates on data that
- has been previously coded/decoded. First, a generic image
- communication system is modelled and situations that
- can lead to distortion accumulation are analyzed. These
- results show two main reasons for distortion accumulation,
- which are separately and jointly investigated using a
- JPEG-type compression algorithm. The first situation
- involves geometric operations between the decoding and next
- coding step. Measurements show, however, that these spatial
- manipulations are the main contributors to distortion
- accumulation. The second reason for distortion accumulation
- is a misalignment of the block segmentation reference point
- in subsequent transform operations. A block raster
- detection algorithm is derived that can find the position
- of the block raster that was introduced in a previous
- coding step. If this information is used in the block
- segmentation of the following coding step, distortion
- accumulation can be avoided. Simulation results are given
- for an extended algorithm that registers regions of
- homogeneous block raster in images consisting of several
- subimages.
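- 
- One simple way to picture block-raster detection (purely illustrative; the
- report derives its own algorithm) is to look for the horizontal phase at
- which inter-column discontinuities, i.e. blocking artifacts, are strongest:
- 
-     import numpy as np
- 
-     def estimate_block_phase(image, block=8):
-         """Return the phase 0..block-1 with the strongest column-to-column
-         discontinuities, a hint for where the block boundaries lie."""
-         col_diff = np.abs(np.diff(image.astype(float), axis=1)).sum(axis=0)
-         scores = [col_diff[off::block].sum() for off in range(block)]
-         return int(np.argmax(scores))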
- -------------------------------------------------------------------------------
- tr-91-065.ps.Z
-
- Motion Video Coding for Packet-Switching Networks
- -- An Integrated Approach
- Michael Gilge and Riccardo Gusella
- TR-91-065
- December 1991
-
- The advantages of packet video, constant image quality,
- service integration and statistical multiplexing, are
- overshadowed by packet loss, delay and jitter. By
- integrating network-control into the image data
- compression algorithm, the strong interactions between the
- coder and the network can be exploited and the available
- network bandwidth can be used best. In order to enable
- video transmission over today's networks without
- reservation or priorities and in the presence of high
- packet loss rates, congestion avoidance techniques need to
- be employed. This is achieved through rate and flow
- control, where feedback from the network is used to adapt
- coding parameters and vary the output rate. From the
- coding point of view the network is seen as a data buffer.
- Analogously to constant bit rate applications, where a
- controller measures buffer fullness, we attempt to avoid
- network congestion (the equivalent of buffer overflow) by monitoring the
- network and adapting the coding parameters in real-time.
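- 
- Schematically, such rate/flow control is a feedback loop that coarsens the
- quantizer when the network reports loss or congestion and refines it again
- when capacity is free. The sketch below is illustrative only; the thresholds
- and the controller itself are not taken from the report.
- 
-     def adjust_quantizer(q_step, loss_rate, target_loss=0.01,
-                          q_min=1, q_max=31):
-         if loss_rate > target_loss:
-             q_step = min(q_max, q_step + 1)   # coarser quantizer, lower rate
-         elif loss_rate < target_loss / 2:
-             q_step = max(q_min, q_step - 1)   # finer quantizer, use spare capacity
-         return q_step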
- -------------------------------------------------------------------------------
- tr-91-068.ps.Z
-
- Construction of a pseudo-random generator from any one-way function
-
- Johan Hastad, Russell Impagliazzo, Leonid A. Levin, Michael Luby
- TR-91-068
-
- We show how to construct a pseudo-random generator
- from any one-way function. In contrast, previous works
- have constructed pseudo-random generators only
- from one-way functions with special structural properties.
- Our overall approach is different in spirit from previous work;
- we concentrate on extracting and smoothing entropy from
- a single iteration of the one-way function using
- universal hash functions.
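- 
- The entropy-smoothing tool referred to here is universal hashing; a standard
- 2-universal family over bit strings is h_M(x) = Mx (mod 2) for a random
- binary matrix M. The sketch illustrates that family only, not the full
- construction of the paper.
- 
-     import numpy as np
- 
-     def random_gf2_hash(out_bits, in_bits, rng=None):
-         rng = rng or np.random.default_rng()
-         M = rng.integers(0, 2, size=(out_bits, in_bits))
-         def h(x_bits):                   # x_bits: 0/1 vector of length in_bits
-             return (M @ np.asarray(x_bits)) % 2
-         return h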
- -------------------------------------------------------------------------------
- tr-91-069.ps.Z
-
- RASTA-PLP Speech Analysis
-
- Hynek Hermansky, Nelson Morgan, Aruna Bayya, and Phil Kohn
- TR-91-069
- December 1991
-
- Most speech parameter estimation techniques are easily
- influenced by the frequency response of the communication
- channel. We have developed a technique that is more robust
- to such steady-state spectral factors in speech. The
- approach is conceptually simple and computationally efficient.
- The new method is described, and experimental results are
- reported, showing a significant advantage for the proposed
- method.
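- 
- A crude illustration of the idea (not the RASTA-PLP filtering itself): a
- fixed channel multiplies the spectrum and therefore adds a constant to each
- log-spectral trajectory, so removing each coefficient's temporal mean
- suppresses such steady-state factors.
- 
-     import numpy as np
- 
-     def remove_steady_state(log_spectra):
-         """log_spectra: array of shape (frames, coefficients)."""
-         return log_spectra - log_spectra.mean(axis=0, keepdims=True)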
- -------------------------------------------------------------------------------
- tr-91-070.ps.Z
-
- Connectionist Speech Recognition: Status and Prospects
-
- Steve Renals, Nelson Morgan, Herve Bourlard, Michael Cohen,
- Horacio Franco, Chuck Wooters and Phil Kohn.
- TR-91-070
- December 1991
-
- We report on recent advances in the ICSI connectionist
- speech recognition project.
- Highlights include:
- 1. Experimental results showing that connectionist
- methods can improve the performance of a context
- independent maximum likelihood trained HMM system,
- resulting in a performance close to that achieved
- using state of the art context dependent HMM systems
- of much higher complexity;
- 2. Mixing (context independent) connectionist
- probability estimates with maximum likelihood trained
- context dependent models to improve the performance of
- a state of the art system;
- 3. The development of a network decomposition method
- that allows connectionist modelling of context
- dependent phones efficiently and parsimoniously, with
- no statistical independence assumptions.
- -------------------------------------------------------------------------------
- tr-91-071.ps.Z
-
- GDNN: A Gender-Dependent Neural Network for
- Continuous Speech Recognition
-
- Yochai Konig, Nelson Morgan, and Claudia Chandra
- TR-91-071
- December 1991
-
- Conventional speaker-independent speech recognition systems do not
- consider speaker-dependent parameters in the probability estimation
- of phonemes. These recognition systems are
- instead tuned to the ensemble statistics over many speakers.
- Most parametric representations of speech, however, are highly
- speaker dependent, and probability distributions suitable for
- a certain speaker may not perform as well for other speakers.
- It would be desirable to incorporate
- constraints on analysis that rely on the same speaker producing
- all the frames in an utterance. Our experiments take a first step
- towards this speaker consistency modeling by using a
- classification network to help generate gender-dependent
- phonetic probabilities for a statistical recognition system.
- Our results show a good classification rate for the gender
- classification net. Simple use of such a model to augment
- an existing larger network that estimates phonetic
- probabilities does not help speech recognition performance.
- However, when the new net is properly integrated in an HMM
- recognizer, it provides significant improvement in word accuracy.
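- 
- One plausible integration (not necessarily the scheme used in the report) is
- to mix gender-specific phonetic probabilities with the gender classifier's
- posteriors, P(phone|x) = sum_g P(phone|x, g) P(g|x):
- 
-     import numpy as np
- 
-     def mix_by_gender(phone_probs_by_gender, gender_posteriors):
-         """phone_probs_by_gender: (genders, phones); gender_posteriors: (genders,)."""
-         return gender_posteriors @ phone_probs_by_gender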
- -------------------------------------------------------------------------------
- tr-91-072.ps.Z
-
- SPERT: A VLIW/SIMD Microprocessor for
- Artificial Neural Network Computations
-
- Krste Asanovic, James Beck, Brian E. D. Kingsbury, Phil Kohn,
- Nelson Morgan, John Wawrzynek
-
- TR-91-072
- December 1991
-
- SPERT (Synthetic PERceptron Testbed) is a fully programmable single
- chip microprocessor designed for efficient execution of artificial
- neural network algorithms. The first implementation will be in a
- 1.2 micron CMOS technology with a 50MHz clock rate, and a prototype
- system is being designed to occupy a double SBus slot within a Sun
- Sparcstation.
-
- SPERT will sustain over 300 million connections per second during
- pattern classification, and around 100 million connection updates
- per second while running the popular error backpropagation training
- algorithm. This represents a speedup of around two orders of magnitude
- over a Sparcstation-2 for algorithms of interest. An earlier system
- produced by our group, the Ring Array Processor (RAP), used commercial
- DSP chips. Compared with a RAP multiprocessor of similar performance,
- SPERT represents over an order of magnitude reduction in cost for
- problems where fixed-point arithmetic is satisfactory.
-
- This report describes the current architecture, and gives the
- results of detailed simulations. The report also makes a short
- comparison to other high-performance digital neurocomputing chips.
- -------------------------------------------------------------------------------
- tr-91-074.ps.Z
-
- Recent Work in VLSI Elements for Digital Implementations of
- Artificial Neural Networks
-
- Brian E. D. Kingsbury, Bertrand Irissou, Krste Asanovic,
- John Wawrzynek, Nelson Morgan
- TR-91-074
- December 1991
-
- A family of high-performance, area-efficient VLSI elements is
- being developed to simplify the design of artificial neural
- network processors. The libraries are designed around the
- MOSIS Scalable CMOS design rules, giving users the option of
- fabricating designs in 2.0um or 1.2um n-well processes, and
- greatly simplifying migration of the libraries to new MOSIS
- technologies. To date, libraries and generators have been
- created for saturating and nonsaturating adders, a
- two's-complement multiplier, and a triple-ported register
- file. The SPERT processor currently being designed at ICSI
- will be based upon these libraries, and is expected to run at
- 50 MHz when realized in a 1.2um CMOS technology.
- -------------------------------------------------------------------------------
- tr-92-005.ps.Z
-
- New algorithmic results for lines-in-3-space problems
-
- Leonidas J. Guibas and Marco Pellegrini
- TR-92-005
- January 1992
-
-
-
- In the first part of the report we consider some incidence and ordering
- problems for lines in 3-space. We solve the problem of detecting
- efficiently if a query simplex is collision-free among polyhedral
- obstacles. In order to solve this problem we develop new on-line data
- structures to detect intersections of query halfplanes with
- sets of lines and segments.
- Then, we consider the nearest-neighbor problems for lines.
- Given a set of $n$ lines in 3-space, the shortest vertical segment between
- any pair of lines is found in randomized expected time
- $O(n^{8/5+\epsilon})$ for every $\epsilon>0$.
- The longest connecting vertical segment is found in time $O(n^{4/3+\epsilon})$.
- The shortest connecting segment is found in time $O(n^{5/3 + \epsilon})$.
- Problems involving lines, points and spheres in 3-space have important
- applications in graphics, CAD and optimization.
- In the second part of the report we consider several problems of this kind.
- We give subquadratic algorithms to count the number of incidences
- between a set of lines and a set of spheres, and to find the minimum
- distance between a set of lines and a set of points.
- We show that the sphere of minimum radius intersecting every line in
- a set of $n$ lines can be found in optimal expected time $O(n)$.
- Given $m$ possibly intersecting spheres we solve ray-shooting queries
- in $O(\log^2 m)$ time using a data structure of size $O(m^{5+\epsilon})$.
- This technical report collects part of the second author's work
- at I.C.S.I. from September 1991 to January 1992.
- -------------------------------------------------------------------------------
-